perm filename ANALOG[RDG,DBL]10 blob sn#645902 filedate 1982-02-26 generic text, type C, neo UTF8
  Analogies, and things like that...

see also	NAIVE[the,rdg] for a beginning of a naive analogy treatise.
		EXAMPL[the,rdg] for body of examples
	Outline

I.  Introduction
  A. Motivation  - people use it all the time
  B. Overview of this paper
	- Purpose
	- Section by section outline

II. Research Programme
  A. Goals of the system
	Include Disclaimer
	- on type of algorithm; not psychological study or philosophical argument
  B. Programme steps
  C. Evaluation/Validation
  D. Details (time scale, domains, ...)

III. Naive analogy
  A. Properties
  B. Dimensions (types, ...)
  C. Examples

IV. Types of analogy
  A. Organization
  B. What they have in common
  C. Which this program(me) will cover

V. Comments on Analogy Systems
  A. Ubiquity - NL, ...
  B. Other AI can be viewed as limiting cases of this defn

VI. Goal behaviour of program
  A. I/O Pairs
  B. Two modules

VII. Actual program design

VIII. Conclusion
  A. Ignorance
  B. Future directions

<<<Appendices>>>

A. Glossary

B. Bibliography

C: Further Examples
	Abstract-ette
This thesis describes a general mechanism for generating and evaluating analogies,
together with the implementation of such a program.
This process is guided by an explicit corpus of heuristics,
which the user can adjust to produce analogies which fit his specifications.
Why am I doing this?
	(perhaps with different emphasis)

This work deals with the use of analogy during the second phase of
Knowledge Acquisition (using JSB's threefold scheme).
This section supplies a brief motivation: why I am looking at
analogy, why analogy is useful as a teaching aid, and why it is being used
for Knowledge Acquisition -- i.e. to input facts into a growing
knowledge base.

Why analogy?  The primary reason is my interest in this area:  few processes
seem as ubiquitous, or essential, to intelligent thought as the ability 
to form and understand analogies.
As I mention in @Cite(NaiveAnalogy),
processes as (at first glance) divergent as understanding language, 
formulating new scientific theories, and appreciating music
all seem to require a non-trivial analogizing ability.

Despite the great interest in this phenomenon, from philosophers and
psychologists as well as AIers,
there seems to be no consensus on how this process operates,
or even on what, exactly, it is.

Why the use of analogy as a teaching tool?
In @Cite(NaiveAnalogy) I list a number of uses of analogy -- ranging from
communication and representation to "discovery".  The one I consider most
tractable (and, as I'll explain soon, most testable) is explanation --
where the speaker has an idea to communicate to a competent, if less
knowledgeable, hearer.  As an (as yet unverified) research claim,
I feel that the other (more sophisticated?) uses of analogy will all
make heavy use of this "module" -- that is, this particular facility
embodies the "central" (core, primary?) use of analogy,
which the other processes can utilize and expand.

Why for a KA task?
I have two main reasons:
First, usefulness:
The central theme in Expert Systems today is KA -- this represents
the major bottleneck impeding the development of these systems.
Corollary 1 claims that, gee, if only we had analogy, this task would
be so much easier...
Second, testability:
Hooking the results of an analogical derivation to an existing, running
expert system renders those results ?easy to test? -- the improved system
either gives meaningful results, over this new (sub)domain or it doesn't.
(Still leaves open a few issues -- like whether it cheated or not, and
whether it was really using analogy or not...)

What do I hope to accomplish?  First, some relevant (scholarly?)
inter-disciplinary research on this <ubiquitous> issue of analogy --
research which sheds some light on this unfortunately sloppily pursued
area.  Second, a running program which may be usable by other researchers
for their projects.  Third, a clearer understanding, in my mind at least,
of some of the underlying processes which go on in our heads
during "intelligent" activities, as manifested by our frequent and easy
use of this complex reasoning process, analogy.
	Overview of this document
To provide a grounding for the balance of this paper,
Section 2 will briefly sketch this overall programme.

Section 3 attempts to formalize what we mean by analogy.
It includes a taxonomy of "types of analogy",
as well as examples of each type of analogy in use.
This particular research task will address only a (proper) subset
of all "legal" analogies --
this subset will be given here.

Section 4 will use a brief literature search to
demonstrate the appropriateness, and scope,
of this model of analogy.
We will see that a large number of existing
analogizing processes (both AI and "natural")
are, in fact, but special cases of this approach.

Section 5 (finally) describes the goal program.
It begins with a sketch of the behaviour (i.e. input/output pairs)
for the target analogizing program.
We will then motivate why this, alone, is insufficient -- 
indeed, we make the claim that no single, unalterable analogizer 
can possibly be all things to all people.
Clearly the user must be permitted to input his own (inherently subjective)
criteria for ranking competing analogies,
or for generating apt analogies.
Each user should be able to modify this analogizer,
by simply entering his particular set of heuristics.
This task - of facilitating the input and incorporation of these new rules -
is performed by the second, major module of the running program --
the analogy-criteria refiner.
It is this modifiability which distinguishes this system from
most other (AI) analogy systems,
and adds respectability to the overall program(me).

The final section gives the real meat of the paper.
Here we will describe how we plan to actually build
a computer program capable of using analogies --
more precisely, how to design the code needed to
generate and use relevant analogies between a pair of models,
for the purpose of communicating.
Each of its two, almost-independent pieces 
(i.e. the actual analogizer and the analogy-criteria refiner)
will be described in some detail here.

The conclusion section will provide a running description of the current
"state of the system", together with updates, and other rethinkings done
by the author after the bulk of this paper has been "written in cement".

Three appendices will follow.
Appendix A is a (partial) glossary, providing a list of our definition of
various terms.
Appendix B is the bibliography, and Appendix C completes the corpus of examples,
begun in Section 3.

----

The real purpose of this report is to specify a full research programme.
The subsequent sections sketch some first ideas on analogies,
which this programme is designed to address.
This section serves as a preface, summarizing our overall goals and aims --
to present the whole picture,
in which to ground the "details" which follow.

We will first present the overall goals of this system -- including both
what we intend to prove, and which arenas we are intentionally NOT entering.
Next is an outline of the steps we will follow towards achieving these results.
The final part of this section gives some pertinent details -- things like
(our initial guess) at the timings for these steps,
what things we anticipate will be on the "critical path",
and some thoughts on the eventual implementation specifications --
such as representation language to use, or desired control regime.

A final warning --
because of the early location of this programme specification,
it will, necessarily, include a great many "forward references" -- terms
which will not be well defined until some later section.
The interested reader, unwilling to suspend his curiosity (or unable
to make the necessary, if temporary, leap of faith),
is referred to the glossary provided in Appendix A to help fathom the
terms.
Otherwise s/he is encouraged to accept the terms as still-to-be-defined
entities, trusting that these definitions will indeed follow.

---

Our use of possibly ambiguous terms, such as Analogy itself,
is based on the definitions given in Appendix A.
They are designed to provide a formal basis for discussing
what WE mean by these terms,
and to indicate whether some mapping is, or is not, an analogy 
(by our definition of the term).
It is within this framework that we
will talk about things like "good analogies", or "apt
abstractions".
{ftnote: We will see in Section 5 that there is some evidence that people do
in fact use something like this process;
and that many of the common uses of analogy
in various philosophical literature can be viewed as a simple case
of the general, encompassing definitions we will give.
However, such evidence should be used only to confirm 
the utility of this basic approach -- ie that Mother Nature found it useful,
and that observers have noticed (and skirted around)
something like this basic mechanism.}

	Misc comments, eventually to be merged in

---
Consider the fact that "analog" (as opposed to digital) is itself an analog of
(yep,) analogue.

---
Justifications are important.
	III. Naive Analogy
<see NAIVE.MSS[the,rdg] -- this to be eliminated>

Before presenting the meat of this thesis proposal, we will first list
a small collection of examples of things which we consider viable instances
of analogy or of an analogizing process.  The breadth and variety
of these examples should point to the ubiquity of this phenomenon in
our day-to-day activities...
In the first few cases, we will present a "quick and dirty" version of
a problem-solving method for handling such analogies.

i) Literary Metaphors:
a) Matching of "outstanding features"
Q: How is Oscar Wilde like the ancient Greeks?  Both were gay.

Q: How is the captain of a ship like a bride (in her wedding dress)?
Both have difficulty in navigating, at least when the bride has a
long train following her.

The analogies here are determined by asking what special features the
two analogs possess, and then seeing which of these are shared.
Notice there is no a priori reason someone would associate sexual preferences
with a society -- only in the case of the ancient Greeks is this a salient feature.
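This "outstanding features" procedure can be sketched in a few lines: collect the features each analog is specially noted for, and intersect. The salient-feature lists below are invented for illustration, not drawn from any actual knowledge base.

```python
# Sketch of the "outstanding features" matcher: collect the features
# each analog is specially noted for, and intersect the two sets.
# The salient-feature lists here are hypothetical illustrations.
def salient_match(salient_a, salient_b):
    """Return the outstanding features the two analogs share."""
    return sorted(set(salient_a) & set(salient_b))

oscar_wilde = ["wit", "gay", "playwright"]
ancient_greeks = ["philosophy", "gay", "mythology"]

print(salient_match(oscar_wilde, ancient_greeks))  # ['gay']
```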

b) Direct Literary Metaphor

Here the match connects what people consider ordinary features, as opposed
to aspects which stand out.
Even within this category there is a large range of generality, 
and of explicitness.  Some metaphors reveal little about the implicit
connection -- the "all x are like y" cases -- while others provide
the necessary mappings explicitly spelled out --
using something like "x is related to y because f(x) = f(y)".
	People are like birds.
	John is like a bird.
	John eats like a bird.
	John eats as much as a bird.
	Tom eats like a small, full bird.
	John eats as many sun-flower seeds as most birds eat.
	John ate as many sun-flower seeds on June 24 as Polly parrot ate that day.

This type of analogizing could be achieved by a relatively simple
property-list matcher.
This will work provided we assume facts like 
"standard substance & quantity consumed"
will automatically be included on the property lists of objects like people
and birds.
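Such a property-list matcher might look like the sketch below, which assumes (as noted above) that facts like quantity-consumed are already stored on each object's property list; all names and values are hypothetical.

```python
# A minimal property-list matcher: two objects match on a property
# exactly when their stored values for that property agree.
# The property lists below are hypothetical illustrations.
def property_match(props_a, props_b):
    """Return the properties, with values, on which both objects agree."""
    return {p: v for p, v in props_a.items()
            if props_b.get(p) == v}

john = {"class": "person", "quantity-consumed": "small", "can-fly": False}
bird = {"class": "bird", "quantity-consumed": "small", "can-fly": True}

# "John eats like a bird": the shared property is quantity-consumed.
print(property_match(john, bird))  # {'quantity-consumed': 'small'}
```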

c) Indirect derived feature
Neither salient, nor immediately stored.
Eg American economy like shoe lace: both can ...

"Now is the winter of our discontent;
Made bright by the sun of Bolingbrook" - or something like that
	- connotations of winter

"Or what a rogue and peasant slave am I"
	- Hamlet isn't, but it is as if he was...
"Juliet is the sun, -- from the east --"
	- standard example [winston] - where brightness (irridaence and happiness)
	and position are played on.

`Solar metaphor' - explanation for myths - popular ca 1890-1910.
Here the connection is unusual; one would miss it unless one happened to have that focus.
Of course, there are many other possible connections, as there were in the first
"People are like birds" case.

(This is a standard game, or challenge -- any pair of things can be connected,
sometimes by virtue only of name.)
(meta level)
The appropriate response is a groan, and a nod in agreement: "Yes, those two things
do share that property, but why mention that feature?"

Here most would consider the connection "contrived"
-- eg probably not explicitly stored,
but straightforward to deduce once the other analog is present.

The algorithm here must be more complex:  After looking at the salient features
of the objects, and then at all "primitive" attributes, one would be forced
to hunt, using the other analog as a basis for generating new properties.
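That cascade -- salient features first, then stored "primitive" attributes, then properties derived on demand using the other analog as a hint -- can be sketched as below; the single derivation rule is a stand-in for a real rule base, and the economy/shoelace contents are invented.

```python
# Cascade matcher: try salient features, then stored attributes, and
# only then hunt for derivable properties, using the *other* analog's
# features as hints for what to derive.  The rule base is a stand-in.
def cascade_match(a, b, derivable):
    shared = a["salient"] & b["salient"]
    if shared:
        return "salient", shared
    shared = a["stored"] & b["stored"]
    if shared:
        return "stored", shared
    hunted = ({f for f in b["stored"] if derivable(a, f)} |
              {f for f in a["stored"] if derivable(b, f)})
    return "derived", hunted

economy = {"salient": set(), "stored": {"inflation"}}
shoelace = {"salient": set(), "stored": {"can-be-tied-up"}}

# Hypothetical rule: an economy "can be tied up" (constrained).
def derivable(obj, feature):
    return feature == "can-be-tied-up" and "inflation" in obj["stored"]

kind, features = cascade_match(economy, shoelace, derivable)
print(kind, features)  # derived {'can-be-tied-up'}
```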

d) Sometimes one has to probe even deeper into the nature of some phenomenon
to see the mapping:
	"Learning at CalTech is like trying to sip water from a fire hydrant"
In both cases one is trying to ingest (a small part of) something,
in a situation where a large quantity of that "substance" is being forced out.

	"sword of damacles"
-- itself used as a metaphor of immenent danger,
which is associated (unavoidably) with some power.  Extended to any
"tense" stree-producing situation.  (Or only certain other parts borrowed:
eg the danger which could have avoided by ....   eg my desk having a point 
at small of back high location...)

ii) Extended meaning of terms
a) Natural language
Many terms are stretched from their original meaning, expanded to handle some new
situation.
Consider a term like "ancestry trees".
Note that biological trees have lent their name to this other
instance of a hierarchy.
Computer Scientists have also exploited this familiar example of a
well-founded ordering in describing trees, using terms like
root, leaves, branchiness, (no, not bark).
Or "he was feeling down", where we use up-down spatial terms
to describe the linear ordering relation of emotions.
The "time is money" is similar (analogous?) -- again we use the
vocabulary from one domain to describe another.  (See Lakoff/Johnson "Metaphors
we live by"]

CS members have borrowed and incorporated a wealth of terms -- many are now
sufficiently ingrained that it is difficult to remember their "etymology"
[Note almost all examples of things within `"'s are being used semi-metaphorically.]

Anatomy, Physiology, Diagnosis - once only for people, now for computers

or the other way (where the terms originally (well, most recently) concocted
to describe computer situations or operators are being applied (at least by
CS people) to people):
 channels, bus, swapped out, 

Note that the Cognitive Scientists who view both people and machines as "brother"
information processors give some credence to these mappings.

b) Mathematics, etc.
This same phenomenon occurs in mathematics as well.  Consider the way
mathematicians extend familiar functions to apply to new domains:
The + operator was originally meaningful only over reals,
but now this same symbol,
and much of its associated semantics (eg its arity of 2, commutativity,
and often inverses and its distributive and definitional relation to "x")
have been borrowed by other structures - such as fields, rings, groups.
(EG matrices, transfinite ordinals, sets, ...)
The "x" operator has been similarly extended.
There seems an underlying unity to all uses of a given symbol: for example,
+ traditionally plays the role of a commutative operator over a set,
with an identity in that set.
Times, while often commutative, is not always -- eg matrices, or cross product.
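The last claim -- that "+" keeps its commutativity when borrowed by a new domain while "x" need not -- can be checked concretely; this sketch uses 2x2 matrices.

```python
# '+' stays commutative when extended to 2x2 matrices; '*' does not.
def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

print(mat_add(A, B) == mat_add(B, A))  # True
print(mat_mul(A, B) == mat_mul(B, A))  # False
```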

Planning, Diagnosis

c) Sometimes it must be done explicitly - by actually describing
the connection: (where both terms are given)
Bachelor bear - meaning extended.

d) The task might go the other way -- as in the familiar
"What are the x of y?" type of question,
where x is (well-)defined only in field z.
[How to "extend" x to the alien field y.]
Language : Phoneme :: Music : ?
? might be Interval (Halperin's conjecture), Timbre, or ...

iii) ?
A circle is like a sphere.
Recursion is like iteration.
Abstraction is like simplification/...

Music is like poetry.
Music (as a sequence) is like a mathematical sequence.

iv) Relate to situation -- abstract out the salient essence of this situation,
and use this to describe...

Like a Fiddler on the Roof -- like day-to-day existence
	- trying to do delicate thing, in precarious situation.

Feedback

Simulation - occurs in computer, logic (cf Steve example from Kechris)
	showing NP-completeness

To get from A to B (eg when solving a problem)
taking the circuitous route 1,2,3, rather than the "obvious" 4
	   2
	↑ -→- ↓
      1	|     | 3
	.     .
	A     B
  Instances: 1 = Meta, Problem Reformulation, Simulation,
	finding mapping between pairs of theories going thru some model

-----
Anyway, any model of analogy should be able to cover (that is, be able
to explain) such examples as these.  The fact that so many disciplines
utilize such a process leads us to view this process as very important,
and as underlying much of our cognitive activity.  We hope the method proposed
in this thesis will be worthy of ...
Things belonging in the Meta-KB

This particular example indicates the sort of facts which we now
feel will be useful for this type of analogizing:
@BEGIN[ENUMERATE]
a large vocabulary of relations between relations -- such as the
TransitiveClosure function used above, or things like Inverse,
Composition, Plussing, ...  
(This could go for several layers,
to define commonality between starring and plussing...)

criteria for evaluating "complexity" -- ie why it is more relevant
to have relations which involve corresponding spaces
than merely the same number of relations, or relations which match
only in arity.
@FOOT{There are times when the "exact match of arity" is inappropriate:
for example, one may want the binary ON@-(2)(x y) relation in one language
to match the ternary ON@-(3)(x y situation) relation in another.}
@END[ENUMERATE]
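A Meta-KB vocabulary of this kind might be stored as higher-order operations on relations; the sketch below implements Inverse, Composition, and the TransitiveClosure function mentioned above, over finite relations represented as sets of pairs. The family-tree data is invented for illustration.

```python
# A tiny "relations between relations" vocabulary, over finite
# relations represented as sets of (x, y) pairs.
def inverse(r):
    return {(y, x) for (x, y) in r}

def compose(r, s):
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def transitive_closure(r):
    closure = set(r)
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:
            return closure
        closure = bigger

# Illustrative data: ancestor = TransitiveClosure(parent).
parent = {("abe", "homer"), ("homer", "bart")}
print(("abe", "bart") in transitive_closure(parent))  # True
```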

	Examples of Abstraction
Abstractions:
	Hierarchy, DAG, Directed Graph, Graph, Extended Graph (w/n-ary rel'ns)
			  \→ Weighted, Directed Graph

Note that this abstracting idea is not something to worry about --
even our perception is not "unbiased" and pure, but in fact has undergone
much processing which throws away "noise" and other ...
@Section(Basis for a Good Analogy)

Basically, the formal definition given for analogy intentionally 
says nothing about how to rank competing analogies,
or about how to generate apt analogies.
The criteria for appraising the accuracy of an analogy
are by no means universal --
rather they are intrinsically subjective and imprecise.
With this in mind,
we propose a filtering process
based on a *user-modifiable* set of heuristics.
These will be used both to differentiate apt from "trivial" analogies,
and to prune inappropriate analogies during the generation stage.

The real meat of the paper will follow this lengthy "introduction".
Here we will describe how we plan to actually build
a computer program capable of using analogies --
more precisely, able to
generate and use relevant analogies between a pair of models,
for the purpose of communicating.
This program will have two large, almost independent pieces.
One is the actual analogizing system.
This initial program will (necessarily) embody the author's particular prejudices.
The purpose of the second part is to modify this analogizer,
by facilitating the input of different heuristics from other users.
It is this modifiability which distinguishes this system from
most other (AI) analogy systems,
and adds respectability to the overall program(me).
[[These new heuristics will modify the analogizer itself,
  honing it to generate better (or at least different) analogies.]]
Hence any user can enter his biases, and see the system produce analogies
he considers reasonable.
(Safeguards will be present to restrict the rules the user may enter,
to ensure the analogies generated fit
within the liberal definition we give for an acceptable analogy.)

This definition is sufficiently general to cover a wide
range of "understandable" (if silly) analogies; both good and bad.
	?. Abstraction - description

	<<work on>>
Analogy is an intrinsically semantic process -- yet all the tools we have,
in a computer system, are inherently syntactic.
In particular, all of the terms come pre-defined (ie no one knows
how to form new terms, ...)
[ie only matching, ... goodness ordered by semantic constraints]
A priori selection shows that analogizing involves finding appropriate abstractions.

In this short report we will propose a general process which we feel is
sufficient (and perhaps necessary) for any system which claims to be capable of 
generating and understanding analogy.
Further, we will show the generality -- indeed, ubiquity -- of this basic process;
and claim it is a necessary component for any competent intellect,
man or machine. (see [EAF - IJCAI lecture])
Finally we will propose a mechanism for analogizing,
based on this abstracting process.

Consider a phrase like "bachelor bear" 
-- which, while "stretching" the standard definition of bachelor,
is still trivially understood by just about anyone.

It's easy to find "selective advantages" for this ability --
most prominently, it economizes on storage of facts, and provides
small ready handles on large chunks of data.
But what is the underlying mechanism for this process?
What types of background data must it have available, and in what organization?
And what assumptions must be made about the world for this analogizing capability
to be useful?

	Towards a Definition of ABSTRACTION
Overview

In this paper, we will regard analogy as a shared common abstraction.
That is, we will consider two objects (or events, processes, etc.)
to be analogous only if they 
share a common abstraction.
This section helps crystallize this notion of abstraction.

A warning:
This definition of abstraction is quite general -- 
indeed, almost to the point of being vacuous.
We will soon see that this is to be expected -- it is well known that
people can easily find analogies between any pair of objects.

The point of this thesis work will be to provide a mechanism for testing...

Terminology

For economy of representation, people often simplify the world they
perceive by factoring out commonalities.
There is no reason to "store" the facts that Peter breathes air,
Paul breathes air, Mary breathes air, ... when the underlying idea
is that people breathe air, and that Peter, Paul and Mary are all people.
Given this, 
a simple inference procedure can now answer any
standard question about substance-breathed.
We can similarly attach facts about location-of-heart, or number-of-hands,
or approximate size,
to the single gestalt which houses facts about people in general,
and know that Peter, Paul and Mary will all "inherit" these facts.

These people-related assertions seem universal,
and, in some sense, guaranteed.  
Every person has two hands, for example.  Well, actually not.
The idea is that *most* people have exactly two hands --
by a sufficiently large margin that it will be
more economical to store that fact
and accept the additional cost associated with handling the exceptions.
There is a great simplicity associated with storing such almost-universal
facts -- which offers an economy of storage, and of retrieval-time inferencing,
when compared with other schemes (in particular, when compared with
storing all the facts at the leaves of the derived tree).

This "TypicalPerson" node may be considered an ABSTRACTION of the idea
of any given person.  
Realize that this TypicalPerson does NOT represent a person out in the real world.
Many of the essential, defining facts pertinent to any real person
will be left out (such as gender) or underspecified --
eg the height value here is simply 4-6 feet, rather than a precise value.
Also not every fact will be universal -- some people have only one hand,
despite TypicalPerson's implicit claim to the contrary.

We may think of an abstraction as representing standard, default information
about members of some class --
in the manner this TypicalPerson relates to the set of all people.
We understand that "usually" any such fact will be true about any
given member of that class.
Nothing is guaranteed, only suggested.
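This default-with-exceptions behaviour can be sketched as a two-step lookup: check the individual first, then fall back to the abstraction, so that stored exceptions override inherited defaults. The node contents below are illustrative only.

```python
# Default inheritance: facts stored once on the abstraction are
# inherited by individuals, but an exception stored on the
# individual overrides the default.  Contents are illustrative.
typical_person = {"breathes": "air", "hands": 2, "height": "4-6 feet"}

def lookup(individual, prop, abstraction=typical_person):
    if prop in individual:
        return individual[prop]       # the exception wins
    return abstraction.get(prop)      # otherwise inherit the default

peter = {}                            # a perfectly typical person
one_handed = {"hands": 1}             # an exception to the default

print(lookup(peter, "hands"))         # 2
print(lookup(one_handed, "hands"))    # 1
print(lookup(peter, "breathes"))      # air
```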

In this simple case, the notion of Person had already been defined.
Other abstractions may implicitly define a new category --
for example,
to deal with the usage of "bachelor" given in the example above,
we may want to talk about "male animals who have no current mate".
These extended-bachelors do indeed form a class, but not one which
any of us would, a priori, have considered.

The basic abstracting process, as these examples imply, is rather
simple:
it amounts to throwing away some of the details about an individual
(or perhaps an individual concept), to leave the necessary "essence" of 
that concept.
Hence TypicalPerson embodies only a subset of the facts true about Tom
(or any other given person), and the typical-extended-bachelor has
only gender (which is male) and basic classification (here animal)
in common with its deriving "bachelor = unmarried male" ancestor.
Other facts are blurred -- made less precise (or abstracted) --
size goes from an infinitely precise numeric value to a general range;
and the concept of "married" is extended to mean "mated with".

We will give more precise details about this process in the following
sections.  For now it is sufficient to regard an abstract of X 
as some intensional object (later, theory) which maintains the
"important" features of X, and throws away everything else.
	Multiple Abstractions

The scare quotes in the preceding paragraph were intentional.
It's not always clear which properties of some object are essential,
and which are simply chaff.
Worse, for different applications, it will be clear that a different
set of facts should be kept.
When viewing Tom as a student, his physical height is totally irrelevant
[* of course, one can always come up with mitigating circumstances
when a student's height is important -- such as in a PE class, or if
he's a student in Boot Camp, which does have height specifications for
all cadets.  But in general, these are pathological cases, which I
will refrain from throwing in henceforth.]
while his GPA is an important specification.
On the other hand, actual physical height is of paramount consideration
when the physical object, Tom, is blocking the light.

	Where does this go -- talking about various uses of abstraction...
	UBIQUITY OF ABSTRACTIONS  (more motivation)

In the next section, when defining our model of analogy in terms of abstraction,
we will see
that the hard part in analogizing is figuring out the appropriate abstraction
to employ.
The rest of this section is devoted to providing further evidence of
the commonness, and usefulness, of abstraction.

Our raw, sensory perception does a good job of abstracting our
inputs, at many levels.
No one remembers EVERY detail of a conversation -- only the main thrusts.
Nor could anyone reconstruct every tissue, fiber and organ of the person
he just saw -- even though he might think he "knows" that person.
Of course, even given total recall, people do not observe every possible
measurement, anyway.  Our auditory abilities are 
good only in certain frequency ranges, as is our visual sense.
Moving from rods and cones, neurophysiologists have found a tremendous
amount of processing which occurs along the optic nerve, long before
the initial "pixel level" signal reaches our brain --
in some sense both "weeding out" extraneous information, and encoding
the information in "higher units" -- in terms of edges or regions, rather
than points.
(Note this is another place our hardware has "pre-decided" -- these
higher-level primitives might have been FFTs of edges, or in terms of
more complex patterns (see Julesz).
Q: Can people learn new types of primitives? - like with trick glasses?)
Hence our brain's input is, at best, but an abstraction of the real world --
we perceive a simplified, pre-packaged view of the external phenomena around
us.

This abstracting process can be in the form of a top-down guidance,
as well as this bottom-up "hardware" filtering.
As [Minsky] points out, the way we flesh out an image, or correct small
misunderstood sounds into words, uses a process which extracts only
certain characteristics and ignores others -- which smacks of abstraction.
This of course goes on at several levels - from phonemic (or region)
up through assumed 3-D objects (or sentences) to high level "understanding"
of the peripheral world.

--- more here ---
Common pedagogic idea: use a solid example 
-- as people abstract general ideas from specifics rather well,
and, apparently, have a hard time going the other way.

---

This idea of abstraction seems quite similar to Quine's idea of
ostension.  He claims that classes are defined by examining certain
members, and then "extrapolating" from these instances -- apparently
having filtered out the noise -- here details particular to the 
individuals included in this sample.

Others have discussed object definition by extending from individuals,
rather than "building up" from primitives.
To define the notion of "game", Wittgenstein collects a host of accepted
examples, then finds a "family resemblance" among them.
The single concept of game serves as an abstraction to each of these
diverse instances.
In "Primi...", Winograd also make the point that much of people's inferencing
is devoted to categorizing individuals based on "prototypes", rather
than on primitives.
These prototypes, holding many layers of default information,
are abstractions gleaned from examination of individuals.
	IV. Tying ANALOGY to ABSTRACTION

Alright - 
Let's now return to analogy -- defined as a shared abstraction.
The overall analogizing process seems pretty simple:
To find if given objects (or events, or ...) X and Y are analogous,
compare the abstraction of X to the abstraction of Y.
If these abstractions match, the original objects are analogous.
Unfortunately, any object may have many distinct abstractions;
and while some may be pre-stored, it seems that most are simply generated
as needed, during this matching process.
(Indeed, people seem very good, and fast, at performing this generation.)
Now the problem is seen to be much more difficult.
How does one determine any common abstraction between two individuals;
and how are analogies rated --
ie why is it so obvious which analogies are good and meaningful, and which
are obviously bad?
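The process just described -- compare an abstraction of X with an abstraction of Y, declaring them analogous on a match -- can be sketched as generate-and-match. Here an abstraction is simply a subset of stored facts, and the exhaustive enumeration stands in for the heuristic generation the text calls for; the circle/sphere facts are illustrative.

```python
from itertools import combinations

# Analogy as shared abstraction: X and Y are analogous if some
# non-empty abstraction of X equals some abstraction of Y.  An
# abstraction here is just a subset of facts; a real system would
# generate candidates heuristically, not exhaustively.
def abstractions(facts):
    items = sorted(facts.items())
    for size in range(len(items), 0, -1):      # prefer larger matches
        for subset in combinations(items, size):
            yield dict(subset)

def analogize(x, y):
    for a in abstractions(x):
        for b in abstractions(y):
            if a == b:
                return a                        # the shared abstraction
    return None

circle = {"shape": "round", "dimensions": 2}
sphere = {"shape": "round", "dimensions": 3}
print(analogize(circle, sphere))  # {'shape': 'round'}
```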

The rest of this section will discuss three issues which arise when 
connecting analogy to abstraction.
First, this abstracting process is probed in more detail,
examining criteria like appropriateness -- in the context of trying to
find the abstraction to use for some analogy
(especially when it is used to generate one).
Then the actual matching process will be considered 
-- just how does one determine that two abstractions are indeed equal?
Given this overview we will sketch the basic algorithm we propose be
used for analogizing -- this will be considerably fleshed out in the next
few sections.

	Which abstraction
A given individual may have many different abstractions, each representing
some (distinct) specific perspective of that entity.
Each such abstraction is useful for deducing facts about that individual
in some situations.
For some applications it is appropriate to regard Peter as a physical object
(eg when we notice his body is blocking the light),
while for others his student perspective is much more pertinent
(eg when determining his GPA).
Similarly a building may be a physical object, or a memory-filled object
(eg a home).

So given a pair of objects, how does one find the best common abstraction?
Well, this is clearly a heuristically guided search, but through what space?
And what are other factors to consider?

Given the usage of abstraction above, constructing an abstraction seems
a fairly trivial operation: simply removing some attribute.  Of course,
this does not guarantee the result will be at all useful.
(Eg deleting the fact that Peter is enrolled at a university leaves his
physical-object perspective intact; deleting the fact that he occupies
space leaves an abstraction useless for reasoning about blocked light.)
Apparently the decision of which fact to remove, or which relation to contract,
is important.
This leads to the second question above -- what other considerations must
come into play?  
Depending on the context, the same pair of objects may fit into quite different
abstractions.
(Eg the purpose of this analogy -- whether it be to inform, or to
store, and to whom (perhaps between homunculi inside the brain) --
is quite important.)
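The perspective-dependence just described can be sketched as follows; the relevance table and the attribute names are illustrative assumptions, standing in for the heuristics this proposal is about:

```python
# Sketch of abstraction as attribute removal, guided by context: the
# same individual yields different abstractions under different
# purposes.  RELEVANT is a stand-in for real abstracting heuristics;
# every name in it is hypothetical.

RELEVANT = {  # purpose -> attributes worth keeping
    "optics":    {"position", "opacity"},
    "registrar": {"gpa", "enrollment"},
}

def abstract(entity, purpose):
    """Drop every attribute irrelevant to the stated purpose."""
    keep = RELEVANT[purpose]
    return {k: v for k, v in entity.items() if k in keep}

peter = {"position": "by the window", "opacity": "opaque",
         "gpa": 3.7, "enrollment": "Stanford"}
```

Abstracting Peter for "optics" keeps only his physical-object facets; abstracting him for "registrar" keeps only his student facets -- the same entity, two abstractions.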

	How to match abstractions
Technically, we insist that the two abstractions match exactly.
However (for efficiency?), we can relax this a little, and permit
some short distances to be traversed -- ie a final close fitting.

Change of variables is allowed.
What if one is an extension of the other: then use the ...
Perhaps the corresponding relations are not equal -- one may take
more arguments, for example.  Then consider the (slightly non-standard)
operation of restricting the more precise operator, to ...
...
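The change-of-variables case can be sketched with a small one-way pattern matcher; representing abstractions as nested tuples, with "?"-prefixed strings as variables, is an assumption for illustration, and the other relaxations above (extensions, restricting a more precise operator) are not modeled:

```python
# One-way structural matcher: the pattern `a` may contain variables
# (strings starting with "?"); a consistent change of variables that
# makes it equal to `b` is returned as a dict, else None.

def match(a, b, env=None):
    env = dict(env or {})
    if isinstance(a, str) and a.startswith("?"):
        if a in env:                      # variable already bound:
            return env if env[a] == b else None
        env[a] = b                        # bind it
        return env
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):            # match component-wise
            env = match(x, y, env)
            if env is None:
                return None
        return env
    return env if a == b else None        # constants must agree exactly
```

For example, matching ("BossOf", "?x", "?y") against ("BossOf", "pres", "vp") succeeds with {"?x": "pres", "?y": "vp"}, while a repeated variable forces the two argument positions to be filled identically.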

	Basic process
Find commonalities of the objects -- and use these to expand this common set into
a coherent abstraction -- ie add in features which must accommodate those
already present (relaxation).
This process may well be: develop along one dimension, then (perhaps)
retrench when probing along another.

What about existing, pre-stored abstractions?
These are useful for determining which facets go together -- are they useful
in any other way?
Yes: as first guesses for ways to expand the feature set.
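A sketch of this basic process, with pre-stored abstractions used as first guesses for expanding the common feature set; the PRESTORED table, its names, and the simple subset-coverage test are illustrative assumptions:

```python
# Basic analogizing loop: start from the features two objects share,
# then expand toward a coherent abstraction by preferring a pre-stored
# abstraction that covers both objects.  Coverage-by-subset stands in
# for the relaxation step described above.

PRESTORED = {  # named abstraction -> its feature set (illustrative)
    "dog":       {"4-legged", "cold-nosed", "barks"},
    "quadruped": {"4-legged"},
}

def analogize(x, y):
    common = x & y
    if not common:
        return None
    # Pre-stored abstractions serve as first guesses for expansion:
    candidates = [(name, feats) for name, feats in PRESTORED.items()
                  if feats <= x and feats <= y]
    if candidates:  # prefer the richest applicable abstraction
        return max(candidates, key=lambda nf: len(nf[1]))[0]
    return common   # fall back to the bare common feature set

fido = {"4-legged", "cold-nosed", "barks", "brown"}
rex  = {"4-legged", "cold-nosed", "barks", "black"}
```

Note how the richer pre-stored "dog" abstraction wins over the bare common features here, echoing the point below that "both are dogs" subsumes, and so beats, "both are 4-legged creatures with cold noses".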

---- Other things to be said ---
*Types of analogy 
I assume (this is subject to verification) that the "goodness" of
the `X is analogous to Y' claim will depend on the A which is the abstraction
common to both X and Y.
(Note this A refers to the "real" abstraction, which (sigh) need not
be the result my program will return.)
There are various criteria which can be used to judge analogies, or
abstractions.

Good - clear cut and sharp
	(note obvious ones are not interesting)
Bad - strained -- the path was long, and unmotivated;
	obvious

Necessary -- ie there really is some common underlying phenomenon which
	forced this connection -- not just serendipity

*Ways of describing analogies
	Both are 4-legged creatures w/cold noses VS both are dogs.
	Note the second subsumes the first - and is therefore better.
	  [ie the former's features are deducible from it]

*What can be analogous - "syntactically"
Prototypes to Prototypes
Individuals to Prototypes
Individuals to Individuals

What about Relations to Relations?
 or are these just Individuals (like the constants)
	How about in the context of some other match...

	VI. Other ANALOGY work

	ubiquity
As the comments in the initial disclaimer implied, analogies seem
ubiquitous.
NL - see Lakoff's book -
 also TW's stuff, with Bachelor bear.

...
Analogies are inherently communicative -- ie they pass information from one
source to another, at a high baud rate.  Note both sender and receiver may be internal.

Analogies occur at various depths.
Some are quite superficial -- for these a rudimentary feature matcher
is amply adequate.  In other cases, the connection is considerably deeper.
Consider the connection joining an organization to a tree.
Here we need to notice that
	Branch (of tree)
corresponds to
	BossOf (of corporation).
Why?  Well, the reason deals with the fact that
both map from one X onto several Xs, and that their respective inverses
are only X to X (ie onto only one).
So far this only describes a loose multi-hierarchy -- what prevents
circles?  Or defines a unique topmost node?

Well - the latter is straightforward:
	Note that President is a special employee - who, by definition,
has no Boss.  Similarly Trunk has an isomorphic condition, with respect
to sprouts.  Now the question is how to find this, fast.

The answer may really be special hardware - which accommodates this
"spreading activation" type of search.  Or we may consider definitional
facts: eg Employee is DEFINED based on this type of employment relation --
which forces an immediate employer.  What of Tree?  Well, the branch is defined
from the point of separation (ie of its branching).  So here we have something
special about the branches-from slot.  So maybe this does work...
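The Branch/BossOf correspondence just argued can be tested mechanically; a sketch, under the illustrative assumption that each relation is given as a parent-to-children map:

```python
# Branch (trees) and BossOf (corporations) correspond because each,
# viewed as a parent->children map, forms a hierarchy: every node has
# at most one parent, there is a unique topmost node (Trunk, President),
# and there are no circles.  Node names below are illustrative.

def is_hierarchy(children_of):
    nodes = set(children_of) | {c for cs in children_of.values() for c in cs}
    parent = {}
    for p, cs in children_of.items():
        for c in cs:
            if c in parent:          # inverse must be "onto only one"
                return False
            parent[c] = p
    roots = nodes - set(parent)
    if len(roots) != 1:              # unique topmost node
        return False
    for n in nodes:                  # no circles: walking up terminates
        seen = set()
        while n in parent:
            if n in seen:
                return False
            seen.add(n)
            n = parent[n]
    return True

corporation = {"president": ["vp1", "vp2"], "vp1": ["clerk"]}
tree        = {"trunk": ["limb1", "limb2"], "limb1": ["twig"]}
```

Both relations pass the same structural test, which is exactly the shared abstraction the deeper matcher must find.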

	other systems
Common abstractions -- other systems' successes
can be explained in this model: simple feature matching was
successful because any set of common features points to something in common,
superficial as it may be.
Sets of features worked better - people, at least (for economy of "thought
space"), find prototypes a good mechanism to follow.
Still perhaps at a superficial level.

Going a bit deeper: it is unlikely two items
would share some relation amongst n-tuples of their respective sub-parts just
by chance -- ie "chances are" there is some underlying reason for this.
PF: more variables ...

Extend this a bit farther, towards underlying causal models -- which, at this
abstraction, do match...  Here it is not just relations, but perhaps 2nd order facts,
about relations among relations, which are important.
The problem is we have to find some SYNTACTIC method - dealing only with
the particular set of features/particular decomposition proposed -
which "instantiates" an inherently SEMANTIC property joining a pair of
entities, which is that property of analogy.  These earlier methods
looked good, but hedged around the real issue - of what the actual
analogy is.
	VII. Goals of this system

So much for overhead.  
While it did present arguments indicating
why analogy is a good, ripe area to study, the introduction
never mentioned just what analogy was -- ie what behaviour an
"Analogy Machine" must be able to exhibit.

Below I list the sort of operations which such a device must be
capable of performing.
All of them fit into the scheme portrayed below:
The Analogizer begins with a wealth of pre-existing knowledge, which
is used for all the queries.
The user enters some inquiry, and possibly provides some context as well
-- ie a statement of the purpose of this question.
(Note: there may be other ways of getting that context information...)
[Further note - this information serves to constrain the possible
abstraction/analogy generators...]
<<<Explaining to Z, talking to Z, storing away (ie what is best retrieval
index), poetic vs illustrative, causal vs serendipity>>>
The Analogizer then computes an answer, which is both returned to the user,
and stored as a new part of the knowledge base.

	    → - - - - - →    Context
	   ↑			|
USER - - → |			|
	   |			↓
	   ↓		|---------------|
			|		|
	Inquiry	 --->	|   Analogizer	|  --->  Answer
			|		|	   .
			|---------------|	   .
				↑		   ∨
				|		   .
				|		   .
			     Knowledge		   ↓
			       Base     ← - - - - ←


Let's now consider the types of question, and expected actions and responses.

I. Analogy Finding (or Analogy Determination)

This takes a general question like
	X is like a ?,
together with a statement of the purpose
	e.g., to explain X to person P (in terms of things P might understand,)
and returns a concise statement
	? = Y.

As a side effect, the fact that X's and Y's are both Z's should be stored away,
available to assist subsequent inquiries.

II. Analogy Rationalization
Tells why "X is like Y" -- ie the answer will (always?) be of the form
"... because both X and Y are Z's".
(For this task the Purpose is optional.)

--- The ones below are clearly subtasks of the first two primary objectives, above.
--- However, they may be used as stand alone procedures, just in case.

III. Analogy Ranking
This determines whether X is more like a Y1 than like a Y2, or not 
(in context P).
(Perhaps it could simply give a numeric "goodness" weight to each of the
"X is like Y1 (in P)" and "X is like Y2 (in P)" analogies.
Note such a value might be negative, if something is a horrid analogy.)

IV. Analogy Incorporation
If the proposed "X is like a Y (in P) [because Z]" analogy is acceptable,
that fact is stored in the Knowledge Base. (Here the user is told "DONE".)
Otherwise the user is told why this analogy was unacceptable.
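Task III above can be sketched numerically; the particular scoring rule here (purpose-relevant matches count for, clashes count against, so a horrid analogy goes negative) is an illustrative assumption, not a commitment of the proposal:

```python
# Sketch of Analogy Ranking: score "X is like Y (in P)" so that Y1 and
# Y2 can be compared.  The purpose P is rendered as the set of features
# that matter in this context.

def goodness(x, y, purpose):
    score = 0
    for feature in purpose:
        in_x, in_y = feature in x, feature in y
        if in_x and in_y:
            score += 1        # shared relevant feature
        elif in_x != in_y:
            score -= 1        # clash on a relevant feature
    return score              # may go negative for a horrid analogy

def rank(x, y1, y2, purpose):
    """Is X more like Y1 than like Y2, in context P?"""
    return goodness(x, y1, purpose) > goodness(x, y2, purpose)
```

Under this rule "X is like Y1 (in P)" and "X is like Y2 (in P)" each get a weight, and ranking is just a comparison of the two weights.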
	(Goals, con't)

Now let's consider the ...
...
... search for new abstractions after the attempts ... have failed.
-- we'll see --

This abstractor should be able to tell how good (ie how appropriate)
some abstraction is -- guided by a set of heuristics.
(I think the incorporation of these heuristics is the very heart
of this overall project.  An initial set is included later in this proposal.)

--- Subtasks include
Locate all predefined abstractions of X.
(ie eliminate some props, and see if that abstraction is already in the KB)
Rate the "goodness" of "A is an abstraction of X".

-- Given X, and an abstraction of X, A.  Now find the abstraction of Y along
this same set of axes.  How to represent the abstraction relation,
to permit this.
The bulk of the research effort will be devoted to the task
of determining what makes some analogies appropriate and others laughable.
There obviously can be no "algorithmic", or "universal", method
for deciding this issue.

Reflecting this idea, the final program will have two parts:
One part will use a set of pre-entered heuristics to generate analogies.
(These will, necessarily, correspond to certain prejudices on the part of the
author(s).)
The second, separate subsystem is designed to change this corpus of heuristics.
It will facilitate the incorporation of new "analogizing and abstracting heuristics"
proffered by subsequent users, within the guideline that applying these rules
results in an acceptable analogy.
{ftnote: Eventually, perhaps, another part of this system may be the source of
these heuristics --
driven by empirical observation, or past successes, or modifications of other
almost successful heuristics.}
Method for :
...

	Conclusion

It is eminently clear that no one, today, really understands analogy --
this author included.
The underlying purpose of this full research effort is to construct
a first pass at a partial answer to such questions and issues --
trying to define just what analogies are, and how they can be used.
As the work reported in this paper is still very much in progress,
much of it will ...

	Glossary
<<<Talk with STT, and ask people like Lindley and Tom D.>>>

Abstraction 
	Really means "is a partial theory of" -- eg some collection of
	(logical) sentences, {s}, is an abstraction of some model M if
	M satisfies {s}.
	Note this says nothing of the completeness of {s}...
	Q: Why "abstraction" rather than "theory"?
	A: Because there may be some thorough, absolute theory of M, call it S;
	and this {s} is really just a sub-theory of that  S.

? Approximation 
	Using a simpler (if erroneous) model,
	for computational ease.  The assumption is that the results
	this model predicts are "close" to the real phenomena.

? Simplification
	Like approximation.  Only here there is an additional constraint:
	that the simpler model not only lead to computational efficiency,
	but also that it corresponds "nicely" to the original.
	(IE that there be corresponding terms, usually)

? Extension
	Standard model-theoretic concept -- one model extends another ...

----
Analogy
	Two models are analogous if they share a common abstraction --
	ie if there is some theory they each satisfy.
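These glossary entries can be miniaturized by taking a "model" to be the set of ground facts true in it -- a deliberate simplification (real satisfaction is model-theoretic, not subset testing), with illustrative fact names:

```python
# Glossary in miniature: a "model" is the set of ground facts true in
# it, a "theory" is any set of sentences; {s} abstracts M when M
# satisfies every sentence of {s}, and two models are analogous when
# some nonempty theory is satisfied by both.

def satisfies(model, theory):
    return theory <= model            # every sentence of {s} holds in M

def is_abstraction(theory, model):
    return satisfies(model, theory)   # says nothing about completeness

def analogous_models(m1, m2):
    shared = m1 & m2                  # the largest theory both satisfy
    return len(shared) > 0

water       = {"flows", "seeks lower level", "is wet"}
electricity = {"flows", "seeks lower level", "shocks"}
```

Here water and electricity come out analogous because each satisfies the shared partial theory {flows, seeks lower level}, though neither satisfies the other's full theory.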

? Reformulation
	? - something about different terms, which collectively
	convey as much information as the original.

? Interfield connection
----

Noise (Spurious, Errorful Detail)
Needless (superfluous) Detail

Sloppy, informal, inexact, imprecise
Mike comments:
Why add new heuristics?  Why not modify ...
... to be more *salient*, leaving the ... proportional ...

	Bibliography
Evans
Kling
? - Metaphors and Models
Hayes-Roth, F & Hayes-Roth, B
Winston
Gentner
Tapple - Thesis proposal (where he defined all sorts of terms)
TW - Primitives, Prototypes
Quine - on Ostension
Wittgenstein - "Games", family resemblance

Darden - Personal communication
Minsky - frames


	Outtakes
eg, in the context of Shakespearean plays,
the nearest play to "Romeo & Juliet" might be ?,
whereas in terms of overall story plot, "West Side Story"
is an obvious selection.)

interaction ... all we have is a starting theory.  Can the analogizer
ask questions to attempt to flesh out the facts needed to complete a match?
A: Sure... can be written into a heuristic.

There is nothing in the analogizer which has that screening function.  
However, there are heuristics which
guide the probing; and these provide the information...

by mapping the interpretation of a symbol in one model onto its interpretation
in the other.

----
So now the problem is set up:
The INQUIRY mentioned in Figure 1 will be two of
<Object1, Object2, PartialTheory>,
and the Answer will be that omitted member of the triple.

(It does get a bit confusing when you consider that both Objects will be
represented as theories as well, but oh well...)
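A sketch of that inquiry format, under the toy assumptions that objects are feature sets, a partial theory is a set of sentences, and the KB is a list of candidate objects (all names illustrative):

```python
# The INQUIRY holds two of <Object1, Object2, PartialTheory>; the
# Analogizer returns the omitted member.  The completion routines below
# are placeholders for the real heuristic search.

KB = [{"flows", "is wet", "seeks lower level"},
      {"flows", "shocks", "seeks lower level"}]

def find_satisfier(theory, exclude):
    """Some KB entry, other than `exclude`, satisfying the theory."""
    for candidate in KB:
        if candidate != exclude and theory <= candidate:
            return candidate
    return None

def answer(inquiry):
    o1 = inquiry.get("Object1")
    o2 = inquiry.get("Object2")
    t  = inquiry.get("PartialTheory")
    if t is None:                  # theory omitted: return what both satisfy
        return o1 & o2
    if o2 is None:                 # an object omitted: find a satisfier
        return find_satisfier(t, exclude=o1)
    return find_satisfier(t, exclude=o2)

water = {"flows", "is wet", "seeks lower level"}
```

Asking with both objects returns their shared partial theory; asking with one object and a theory returns some other satisfier of that theory.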

Now to address that context mess.  This collection is currently a
bag of miscellaneous conditions which serve to guide this particular,
individual analogy.  They will be the same type of heuristics which
are present in the heuristics KB, which we'll now discuss.

(Cute, but unused example:)
There are other mechanisms: for example, some relation may be generalized.
A simple example would be ...
... the class NaturalNumber to ... (the predicate for)
Integer.  An abstraction of PrimeNumber might be (the theory of) integers
which have under three prime divisors -- or even the class ...
Note that this latter theory (in addition to primes) would cover
recipes which require mixing but two ingredients,
or conjuncts containing but two atomic clauses.
It is difficult to see how it would handle cases where some predicate
is weakened, or any of the other methods of abstracting.
(Ie what properties do PrimeNumbers and Binary-Ingrediented Recipes share?)
We might say
	(All x. (Member x PrimeNumber) => ((Cardinality (Factors x)) = 2))
  matches
	(All x. (Member x BinaryRecipes) => ((Cardinality (Ingredients x)) = 2))
as both Factors and Ingredients return (something like) the set
of constituent parts of their argument.
...
This superficial notion implies that determining an analogy involves
simply comparing the obvious features/properties of the two proposed analogues,
and declaring them analogous if this partial match exceeds some threshold.

----
Having laid the groundwork with that brief introduction above,
we can now be a bit more formal in our definitions.

	Subsection (Notation, and Definition)
As was noted in the above disclaimer,
analogy is a poorly understood (and hence overused) term.
One reason is that analogies can serve several, closely related purposes;
some of which are enumerated below.
An analogy is a relation which holds between a pair of things -
eg between two objects, events, situations, or whatever.
We will use a slightly different definition ...

Analogies can serve either a linguistic or a representational function.
The linguistic use conveys a bundle of facts, compactly and quickly,
to the hearer.
Analogies thus facilitate the communication of complex concepts,
allowing this to be done rapidly and efficiently.
For example, the simple statement that "electricity resembles
water flow" conveys information about electricity (and possibly,
going the other way, about fluid flow),
which helps the hearer ...
This information may be used to solve electric problems.

...
We chose to regard this case as a communication:
not between two people, but rather between
two homunculi communicating - where the first is ...
...
For this approach to work, the hearer must do a great deal of inferencing --
he must first "look up" his corpus of facts about fluid flow,
then decide which properties should carry over to the
corresponding features for electricity.
It is quite surprising how well people do at each of these steps.
The first and third seem, epistemologically, quite straightforward.
The second, however, is next to unfathomable.
This process of deciding which parts to place in correspondence
is the crux of intelligent analogizing.
Most of this research effort will be devoted to
developing heuristics which account, first, for standard "human" behaviour,
and then for reasonable behavior - based on the economy of interface criteria
to be discussed below.

<<DIGRESSION>>
There can certainly be analogy-like behaviour between a pair of machines,
as well as a pair of people.  
The only requirement would be the same basic purpose -- where some few
bits are transmitted to convey a much larger concept, based on the
assumption that the recipient
will be able to infer the needed additional facts (thereby fleshing out the
model by finding the corresponding mappings.)
<<Note: here using an abstraction of the idea of analogy.>>

-----
Here, from SCENAR
<<Ignore this page.  I think the approach given on the next page is more credible>>

KA: I know/infer that M-F is a cursor positioning command,
and that forward implies the cursor position is incremented.
By how much?

U: Distance(Cursor-Position End(cur-word)).
<<<NO NO NO -- really goes to a character after the NEXT word, if NOT in a word>>>>

KA: So M-F means
    Cursor-Position := Cursor-Position + Distance(Cursor-Position End(cur-word)).
		(unless at boundary).
Correct?

U: Yes.

KA: C-B means
    Cursor-Position := Cursor-Position + -1.
Shall I assume M-B means
    Cursor-Position := Cursor-Position + Distance(Cursor-Position End(cur-word)).
		(unless at boundary)?

U: No -- use Start(cur-word), not End(cur-word).
[Note the quantity added will be negative.]

KA: So M-B means
    Cursor-Position := Cursor-Position + Distance(Cursor-Position Start(cur-word)).
		(unless at boundary),
correct?

U: Yep.
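The command semantics inferred in this exchange can be executed directly; the word-boundary rules below are a simplification (any real editor handles punctuation and buffer edges differently), and the function names are merely descriptive:

```python
# Executable sketch of the dialogue's conclusions: M-F adds
# Distance(point, End(cur-word)); M-B adds the (negative)
# Distance(point, Start(cur-word)).  Per the correction above, if not
# in a word these skip to the next/prior word first.

def end_of_word(text, pos):
    """Position just past the end of the current (or next) word."""
    while pos < len(text) and not text[pos].isalnum():
        pos += 1
    while pos < len(text) and text[pos].isalnum():
        pos += 1
    return pos

def start_of_word(text, pos):
    """Position of the start of the current (or prior) word."""
    while pos > 0 and not text[pos - 1].isalnum():
        pos -= 1
    while pos > 0 and text[pos - 1].isalnum():
        pos -= 1
    return pos

def meta_f(text, pos):
    return pos + (end_of_word(text, pos) - pos)    # Distance(pos, End)

def meta_b(text, pos):
    return pos + (start_of_word(text, pos) - pos)  # quantity added is negative

text = "move the cursor"
```

Running M-F twice from position 0 lands just past "the"; M-B from there backs up to the start of "the" -- exactly the symmetric pair the dialogue converges on.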
KA: Now onto the text modification commands, M-D and C-x rubout?
I have M-D meaning
    DeleteChars(Cursor-Position Distance(Cursor-Position End(cur-word)))
		(unless at boundary).
Correct?

U: Yes.

KA: Shall I assume C-x rubout means
    DeleteChars(Cursor-Position Distance(Cursor-Position Start(cur-word)))
		(unless at boundary)?

U: Yep.

It's easy to see how the cursor's position determines the current character
-- as it points to a single one.
What does it mean for the cursor to "point to a word"?

U: Cur-word is the word which contains the current character.

KA: OK.

U: What is Start?

KA: Start(word) is the position of the first letter of the string of characters
which form this word.  End(word) is the first character AFTER the end of the word.
<<<NO NO NO -- if in word then it refers to the start of the word.
If at space, then jumps to PRIOR word.>>>
KA: Note that 1 = length of current single character.
@BEGIN(Multiple)
"Representation" (still unexplored)
@BEGIN(Itemize, Spread 0)
@b(Motto:)  @i(Non-decomposition way of referring to A.)
@b(Scenario:)  A KB encodes the gestalt A by indicating that it is like B.
It possibly includes the known way(s) in which A and B are similar,
and how different.
(Eg storing the fact that electricity is like water flow.)

@b(Purpose:)  Fast look-up of A-like things (facilitates 1. and 2. above?);
allow multiple representations of an object (see KRL perspective), ...

@b(Subcases:)  (i) B satisfies some of the equations which A satisfies;
(ii) B exhibits similar behaviour; (iii) B is "known" to have corresponding
internal structure (at some level of abstraction)...
(Use of prototypes is similar to this; see also use of models...)

@b(Preconditions:)  KA must have some corpus of gestalts, and a pretty fancy
inferencing scheme for using this by-analogy-indexing scheme.

@b(Near miss:)  Using a simple decomposition scheme, in which the features of
A and B (and their partial match) are explicit.
This is too obvious to be interesting (here), and is no longer gestalt matching.

@b(Examples:)
@END(Itemize)
@END(Multiple)